Frontiers in Computational Neuroscience
© Frontiers Media SA
All preprints, ranked by how well they match Frontiers in Computational Neuroscience's content profile, based on 53 papers previously published here. The average preprint has a 0.10% match score for this journal, so anything above that is already an above-average fit. Older preprints may already have been published elsewhere.
Naze, S.; Kozloski, J.
Large-scale brain models encompassing cortico-cortical, thalamo-cortical and basal ganglia processing are fundamental to understanding the brain as an integrated system in healthy and diseased conditions, but they are complex to analyze and interpret. Neuronal processes are typically segmented by region and modality in order to explain an experimental observation at a given scale, but integrative frameworks linking scales and modalities are scarce. Here, we present a set of functional requirements used to evaluate a recently developed large-scale brain model against a learning task involving coordinated learning between cortical and sub-cortical systems. The original Information Based Exchange Brain model (IBEx) is decomposed into functionally relevant subsystems, and each subsystem is analyzed and tuned independently with regard to its relevant functional requirements. Intermediate conclusions are drawn for each subsystem according to the constraints imposed by these requirements. The subsystems are then re-introduced into the global framework. The relationship between the global framework and phenotypes associated with Huntington's disease is then discussed, and the framework is considered in the context of other state-of-the-art integrative brain models.
Azam, M. A.; Kumbhare, D.; Hadimani, R.; Toms, J.; Baron, M.; Atulasimha, J.
A modified computational model of pallidal-receiving ventral oral posterior (Vop) thalamocortical motor relay neurons was adapted based on in vivo observations in our rodent model. The model accounts for different input neuronal firing patterns from the primary motor output nucleus of the basal ganglia, the globus pallidus interna (GPi), and subsequently generates Vop outputs as observed in vivo under different conditions. Hyperpolarizing input de-inactivates the T-type calcium channel, sets thalamic neurons in a burst firing mode rather than a tonic mode, and induces low-threshold spikes (LTS). In the hyperpolarized state, both spontaneously and in response to excitatory (e.g. corticothalamic) inputs, burst spiking occurs on the crest of the LTS. By selecting and determining the timing and extent of opening of thalamic T-type calcium channels via GABAergic hyperpolarizing input, the GPi precisely regulates Vop-cortical burst motor signaling. Different combinations of tonic, burst, irregular tonic and irregular burst inputs from the GPi were used to verify our model. In vivo data obtained from recordings in the entopeduncular nucleus (EP; the rodent equivalent of the GPi) of resting, head-restrained healthy and dystonic rats were used to simulate the influences of different inputs from the GPi. In all cases, GPi neuronal firing patterns are demonstrated to act as a firing-mode selector for thalamic Vop neurons.
Yoshihara, M.; Isomura, T.
Active inference has been proposed as a unified explanation of perception and action. Previous work has shown that canonical neural networks that minimize shared Helmholtz energy can be cast as performing active inference on the external environment. However, how animals flexibly adapt to newly encountered environments remains to be fully addressed. To investigate the brain's generalizability and adaptability to new environments, this work develops canonical neural networks that employ multiple policy matrices in parallel. We demonstrate that the proposed model can recapitulate prism adaptation, a form of visuomotor adaptation, in an arm-reaching task. Using policy matrices pretrained under various target positions, these networks could transfer previous experiences and exhibited faster adaptation to the prism-shifted environment than naive networks. Furthermore, after-effects were observed following the removal of the simulated prism glasses. These results suggest the biological plausibility and utility of the proposed model, providing insights into the adaptive capabilities of the brain.
Zhao, M.; Wang, N.; Jiang, X.; Ma, X.; Ma, H.; He, G.; Du, K.; Ma, L.; Huang, T.
The behavior of an organism is profoundly influenced by the complex interplay between its brain, body, and environment. Existing data-driven models focus on either the brain or the body-environment separately; a model that integrates these two components has yet to be developed. Here, we present MetaWorm, an integrative data-driven model of a widely studied organism, C. elegans. This model consists of two sub-models: the brain model and the body & environment model. The brain model was built from multi-compartment neuron models with realistic morphology, connectome, and neural population dynamics based on experimental data. The body & environment model employs a lifelike body and a 3D physical environment, facilitating easy behavior quantification. Through the closed-loop interaction between the two sub-models, MetaWorm faithfully reproduces the realistic zigzag movement towards attractors observed in C. elegans. Notably, MetaWorm is the first model to achieve seamless integration of detailed brain, body, and environment simulations, enabling unprecedented insights into the intricate relationships between neural structures, neural activities, and behaviors. Leveraging this model, we investigated the impact of neural system structure on both neural activities and behaviors. Consequently, MetaWorm can enhance our understanding of how the brain controls the body to interact with its surrounding environment.
Tamura, H.
Neurons in the cerebral cortex are organized topographically. In the primate visual cortex, neighboring neurons often respond to similar stimulus parameters, such as receptive field position, orientation, color, and spatial frequency. Preferred stimulus parameters change smoothly across the cortical surface. If such topographic organization plays an important role in computation, it is likely to emerge in artificial neural networks. In this study, a multistream convolutional neural network was constructed in which filters in the first convolutional layer were arranged in a two-dimensional filter matrix according to their output connections. The network was trained using supervised learning for image classification. Although adjacent filters in the filter matrix can in principle develop any structure, they acquire similar degrees of orientation and color selectivity. Moreover, they prefer similar orientations, hues, and spatial frequencies. The similarity decreases with the distance between filters in the matrix. Furthermore, neural-network model instances with a strong relationship between filter distance and filter-property similarity performed better than those with a weak relationship. These results show that topographic organization can emerge spontaneously in an artificial neural network and plays an important role in model performance, underscoring the importance of topographic organization for computations performed by artificial and biological neural networks.
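The distance-versus-similarity analysis this abstract describes can be sketched on a synthetic filter matrix. The smooth orientation-preference gradient below is an invented stand-in for the trained network's filters, purely to illustrate the measurement: preference difference should correlate positively with matrix distance.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

size = 8
# Synthetic "preferred orientation" map (degrees): a smooth gradient across the matrix.
pref = {(i, j): 5.0 * (i + j) for i in range(size) for j in range(size)}

dists, diffs = [], []
cells = sorted(pref)
for a in range(len(cells)):
    for b in range(a + 1, len(cells)):
        (i1, j1), (i2, j2) = cells[a], cells[b]
        dists.append(math.hypot(i1 - i2, j1 - j2))           # distance in the filter matrix
        diffs.append(abs(pref[cells[a]] - pref[cells[b]]))   # preference difference

r = pearson(dists, diffs)
print(r > 0.0)  # nearby filters have more similar preferences
```

On a trained network one would replace the synthetic `pref` map with measured filter properties (orientation, hue, spatial frequency) and repeat the same correlation.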
Agueci, L.; Gajic, N. A. C.
Adaptation is a fundamental aspect of motor learning. Intelligent systems must adapt to perturbations in the environment while simultaneously maintaining stable memories. Classic work has argued that this trade-off could be resolved by complementary learning systems operating at different speeds; yet the mechanisms enabling coordination between slow and fast systems remain unknown. Here, we propose a multi-region distributed learning model in which learning is shared between two populations of neurons with distinct roles and structures: a recurrent controller network which stores a slowly evolving memory, and a feedforward adapter network that rapidly learns to respond to perturbations in the environment. In our model, supervised learning in the adapter produces a predictive error signal that simultaneously tutors consolidation in the controller through a local plasticity rule. Our model offers insight into the mechanisms that may support distributed computations in the motor cortex and cerebellum during motor adaptation.
Levy, W. B.; Baxter, R.
The development of many feedforward pathways in the brain, from sensory inputs to neocortex, has been studied and modeled extensively, but the development of feedback connections, which tends to occur after that of the feedforward pathways, has received less attention. The abundance of feedback connections within neocortex and between neocortex and thalamus suggests that understanding feedback connections is crucial to understanding connectivity and signal processing in the brain. It is well known that many neural layers are arranged topologically with respect to sensory input, and many neural models impose a symmetry of connections between layers, commonly referred to as reciprocal connectivity. Here, we are concerned with how such reciprocal feedback connections develop so that the topology of the sensory input is preserved. We focus on feedback connections from layer 6 of visual area V1 to primary neurons in the Lateral Geniculate Nucleus (LGN). The proposed model is based on the hypothesis that feedback connections from V1-L6 to LGN use voltage-activated T-channels to appropriately establish and modify synapses in spite of unavoidable temporal delays. We also hypothesize that developmental spindling relates to synaptogenesis and memory consolidation.
Remmelzwaal, L. A.; Ellis, G. F. R.; Tapson, J.
In this paper we introduce a novel Salience Affected Artificial Neural Network (SANN) that models the way neuromodulators such as dopamine and noradrenaline affect neural dynamics in the human brain by being distributed diffusely through neocortical regions. This allows one-time learning to take place by strengthening entire patterns of activation at one go. We present a model that accepts a salience signal and returns a reverse salience signal. We demonstrate that we can tag an image with salience with only a single training iteration, and that the same image will then produce the highest reverse salience signal during classification. We explore the effects of salience on learning via its effect on the activation functions of each node, as well as on the strength of weights in the network. We demonstrate that a salience signal improves classification accuracy of the specific image that was tagged with salience, as well as of all images in the same class, while penalizing images in other classes. Results are validated using 5-fold validation testing on the MNIST and Fashion-MNIST datasets. This research serves as a proof of concept and could be the first step towards introducing salience tagging into deep learning networks and robotics.
Zhao, Q.; Xu, J.; Li, D.; Wu, X.; Zhang, K.; Chu, C.; Fan, L.
NeuroAI explores the interplay of neuroscience and artificial intelligence, especially in visual processing. The human visual system organizes objects according to a representational hierarchy. However, it remains unclear whether this hierarchy arises from visual or semantic information. One hypothesis posits that the visual system is structured around statistical regularities of visual information. Here, we test this hypothesis using the THINGS datasets and purely visual deep neural networks (DNNs). We constructed a low-dimensional object space based on multiple abstract object properties, reflecting statistical patterns of visual regularities. By applying voxelwise encoding models, we identified clusters in the higher visual cortex based on their property tuning, and these clusters were found to support specific object categories. The clusters serve as the middle level of a property-cluster-object hierarchical organization. Subsequently, we investigated whether this hierarchical structure could be captured by a self-supervised DNN. Through activity similarity analysis, we mapped the brain clusters onto the DNN and independently found that the DNN's clusters exhibited distinct property tuning and influenced the classification accuracy of the corresponding object categories, mirroring the effects observed in the human brain. Our results demonstrate similar hierarchical structures in the human brain and a self-supervised DNN, suggesting that visual regularities shape the neural architecture of the visual system. This study highlights the great potential of neural computational models in neuroscience. Index Terms: Visual Processing, Abstract Property, Hierarchical Representation, Self-supervised Visual DNN
Pettine, W. W.; Louie, K.; Murray, J. D.; Wang, X.-J.
We investigated two-attribute, two-alternative decision-making in a hierarchical neural network with three layers: an input layer encoding the attribute values of the choice alternatives; an intermediate layer of modules processing separate attributes; and a choice layer producing the decision. Depending on the intermediate layer's excitatory-inhibitory (E/I) tone, the network displays three distinct regimes characterized by linear (I), convex (II) or concave (III) choice indifference curves. In regimes I and II, each option's attribute information is additively integrated. To maximize reward at low environmental uncertainty, the system should operate in regime I. At high environmental uncertainty, reward maximization is achieved in regime III, with each attribute module selecting a favored alternative and the ultimate decision based upon comparison between the outputs of the attribute-processing modules. We then use these principles to examine multi-attribute decisions under autism-related deficits in E/I balance, leading to predictions of different choice patterns and overall performance between autistic and neurotypical individuals.
Zhao, C.; Cui, H.
In many voluntary movements, neural activity ranging from cortex to spinal cord can be roughly described in the stages of motor intention, preparation, and execution. Recent advances in neuroscience have produced many theories of how motor intention is transformed into action through these stages, but a holistic, mechanistic account of the whole process is still lacking. Here, we formulate this question by abstracting two underlying principles: 1) the neural system specializes the final motor command through a hierarchical network via multitudes of training supervised by action feedback ("practice often"); 2) prediction is a general mechanism throughout the whole process, providing feedback control for each local layer ("always get ready"). We present a theoretical model of voluntary motor control based on these two principles. The model features hierarchical organization and is composed of spiking building blocks based on previous work in predictive coding and adaptive control theory. By simulating our manual interception paradigm, we show that the network can demonstrate motor preparation and execution, generate the desired output trajectory following intention inputs, and exhibit cortical and endpoint dynamics comparable with empirical data.
Yoshida, N.; Kanazawa, H.; Kuniyoshi, Y.
Homeostasis is a fundamental property for the survival of animals. Computational reinforcement learning provides a theoretically sound framework for learning autonomous agents. However, the definition of a unified motivational signal (i.e., reward) for integrated survival behaviours has been largely underexplored. Here, we present a novel neuroscience-inspired algorithm for synthesising robot survival behaviour without the need for complicated reward design and external feedback. Our agent, the Embodied Neural Homeostat, was trained solely with feedback generated by its internal physical state and optimised its behaviour to stabilise these internal states: homeostasis. To demonstrate the effectiveness of our concept, we trained the agent in a simulated mechano-thermal environment and tested it in a real robot. We observed the synthesis of integrated behaviours, including walking, navigating to food, resting to cool down the motors, and shivering to warm up the motors, through the joint optimisation for thermal and energy homeostasis. The Embodied Neural Homeostat successfully achieved homeostasis-based integrated behaviour synthesis, which has not previously been accomplished at the motor control level. This demonstrates that homeostasis can be a motivating principle for integrated behaviour generation in robots and can also elucidate the behavioural principles of living organisms.
Pham, T. Q.; Yoshimoto, T.; Niwa, H.; Takahashi, H. K.; Uchiyama, R.; Matsui, T.; Anderson, A. K.; Sadato, N.; Chikazoe, J.
Humans, and now computers, can derive subjective valuations from sensory events, although the underlying transformation process is essentially unknown. In this study, we investigated these neural mechanisms by comparing convolutional neural networks (CNNs) to their corresponding representations in humans. Specifically, we optimized CNNs to predict aesthetic valuations of paintings and examined the relationship between the CNN representations and brain activity via multivoxel pattern analysis. Primary visual cortex and higher association cortex activities were similar to computations in shallow and deeper CNN layers, respectively. The vision-to-value transformation is hence shown to be a hierarchical process, consistent with the principal gradient that connects unimodal to transmodal brain regions (i.e., the default mode network). The activity of the frontal and parietal cortices was approximated by a goal-driven CNN. Consequently, representations in the hidden layers of CNNs can be understood and visualized through their correspondence with brain activity, facilitating parallels between artificial intelligence and neuroscience.
Oess, T.; Ernst, M. O.; Neumann, H.
The development of spatially registered auditory maps in the external nucleus of the inferior colliculus in young owls, and their maintenance in adult animals, is visually guided and evolves dynamically. To investigate the underlying neural mechanisms of this process, we developed a model of stabilized neo-Hebbian correlative learning augmented by an eligibility signal and a temporal trace of activations. This three-component learning algorithm facilitates stable, yet flexible, formation of spatially registered auditory space maps composed of conductance-based, topographically organized neural units. Spatially aligned maps are learned for visual and auditory input stimuli that arrive in temporal and spatial registration. The reliability of visual sensory inputs can be used to regulate the learning rate in the form of an eligibility trace. We show that by shifting visual sensory inputs at the onset of learning, the topography of auditory space maps is shifted accordingly. Simulation results explain why a shift of auditory maps in mature animals is possible only if corrections are induced in small steps. We conclude that learning spatially aligned auditory maps is flexibly controlled by reliable visual sensory neurons and can be formalized by a biologically plausible unsupervised learning mechanism.
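The eligibility-gated correlative rule described here can be sketched as follows. This is our illustration, not the paper's conductance-based map model: a two-synapse toy in which a stabilized Hebbian update is scaled by an eligibility signal standing in for the reliability of the visual teaching input.

```python
def train(eligibility, steps=200, eta=0.05):
    """Train two synapses with an eligibility-gated, stabilized Hebb rule."""
    w = [0.0, 0.0]  # weights from two auditory afferents
    for t in range(steps):
        if t % 2 == 0:
            pre, post = [1.0, 0.0], 1.0  # afferent 0 co-active with the visual teacher
        else:
            pre, post = [0.0, 1.0], 0.0  # afferent 1 fires when the teacher is silent
        for k in range(2):
            # Hebbian term pre*post, stabilized toward post, scaled by eligibility
            w[k] += eta * eligibility * pre[k] * (post - w[k])
    return w

print(train(1.0))  # correlated synapse strengthened; uncorrelated one stays at zero
print(train(0.0))  # eligibility of zero: no learning at all
```

With full eligibility only the synapse whose input is correlated with the teaching signal is strengthened; setting the eligibility to zero freezes both weights, mirroring how unreliable visual input would suppress map plasticity.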
Chu, T.; Wu, Y.; Qiu, W.; Jiang, Z.; Burgess, N.; Hong, B.; Wu, S.
Localized space coding and phase coding are two distinct strategies responsible, respectively, for representing abstract structure and sensory observations in neural cognitive maps. In spatial representation, localized space coding is implemented by place cells in the hippocampus (HPC), while phase coding is implemented by grid cells in the medial entorhinal cortex (MEC). Both strategies have their own advantages and disadvantages, and neither of them alone meets the requirement of representing space robustly and efficiently in the brain. Here, we show that through reciprocal connections between HPC and MEC, place and grid cells can complement each other to overcome their respective shortcomings. Specifically, we build a coupled network model in which a continuous attractor neural network (CANN) with a position coordinate models place cells, while multiple CANNs with phase coordinates model grid cell modules with varying spacings. The reciprocal connections between place and grid cells encode the correlation prior between the sensory cues processed by HPC and MEC, respectively. Using this model, we show that: 1) place and grid cells interact to integrate sensory cues in a Bayesian manner; 2) place cells complement grid cells in coding accuracy by eliminating non-local errors of the latter; 3) grid cells complement place cells in coding efficiency by enlarging the number of environmental maps stored stably by the latter. We demonstrate that the coupled network model explains the seemingly contradictory experimental findings about the remapping of place cells when grid cells are either inactivated or depolarized. This study offers insight into how the brain employs collaborative localized and phase coding to realize both robust and efficient information representation.
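The Bayesian cue integration in point 1) above reduces, in the simplest Gaussian case, to precision-weighted fusion of the two cues. The sketch below is our stand-in for the HPC-MEC interaction, with invented cue means and variances; it is not the coupled CANN model itself.

```python
def fuse(mu1, var1, mu2, var2):
    """Posterior mean and variance for two independent Gaussian position cues."""
    w1, w2 = 1.0 / var1, 1.0 / var2   # precisions of the two cues
    var = 1.0 / (w1 + w2)             # posterior variance (always smaller than either)
    mu = var * (w1 * mu1 + w2 * mu2)  # precision-weighted posterior mean
    return mu, var

# Hypothetical example: a precise cue at position 2.0 and a broad cue at 4.0.
mu, var = fuse(2.0, 0.5, 4.0, 2.0)
print(round(mu, 6), round(var, 6))  # → 2.4 0.4
```

The posterior lands closer to the more reliable cue (2.4, nearer 2.0 than 4.0) and is tighter than either input, which is the signature of Bayesian integration the model attributes to the place-grid coupling.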
Yonekura, S.; Cueto, J.; Kanazawa, H.; Atsumi, N.; Hirabayashi, S.; Iwamoto, M.; Kuniyoshi, Y.
Respiration and emotional stimuli modulate cognitive ability and the reaction time to generate bodily movement. To understand the mechanisms of emotion-respiration-cognition coupling, we first considered a schematic feed-forward neural network in which neurons were biased by respiratory-relevant sensory input and the activation function of a neuron was modulated by a neuromodulator such as norepinephrine (NE). We assumed that the neural model received a stimulus input and generated a response action when the activity of the output neuron exceeded a certain threshold. Time-to-respond (TTR) was modulated equally by the intensity of the input bias and the neuromodulator strength for a small action-execution threshold; however, it was modulated predominantly by the neuromodulator for a high threshold. Second, we implemented a comprehensive model comprising a cardio-respiratory neuromechanical-gas system, a respiratory central pattern generator (CPG), NE dynamics modulating neurocognitive dynamics, and a locus coeruleus (LC) circuit, the primary nucleus controlling NE. When the LC neurons received pCO2 or synaptic current from inspiratory neurons, TTR was shortened by a stimulus input during inhalation; by contrast, when they received pulmonary stretch information, TTR was shortened by a stimulus input during exhalation. In humans, TTR is shortened when a fear-related stimulus is presented during inhalation and weakly shortened when a surprise-related stimulus is presented during exhalation. Hence, we conclude that emotional stimuli in humans may switch the gating of information inflow to the LC to change the attention or behavior strategy.
McVeigh, K.; Singh, A.; Erdogmus, D.; Feldman Barrett, L.; Satpute, A.
This paper uses a generative neural network architecture combining unsupervised (generative) and supervised (discriminative) models with a model-comparison strategy to evaluate assumptions about the mappings between brain states and behavior. Most modeling in cognitive neuroscience publications assumes a one-to-one, linear brain-behavior relationship, but never tests this assumption or the consequences of violating it. We systematically varied these assumptions using simulations of four ground-truth brain-behavior mappings involving progressively more complex relationships, ranging from one-to-one linear mappings to many-to-one nonlinear mappings. We then applied our Variational AutoEncoder-Classifier framework to the simulations to show how it accurately captured diverse brain-behavior mappings, provided evidence regarding which assumptions are supported by the data, and illustrated the problems that arise when assumptions are violated. This integrated approach offers a reliable foundation for cognitive neuroscience to effectively model complex neural and behavioral processes, allowing more justified conclusions about the nature of brain-behavior mappings.
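Why the linearity assumption matters can be shown with a much smaller toy than the paper's VAE-Classifier (the ground-truth mappings below are our own, chosen for illustration): a linear readout recovers a one-to-one linear brain-to-behavior mapping perfectly, but explains essentially none of the variance of a many-to-one nonlinear mapping in which two brain states produce the same behavior.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def r_squared(xs, ys):
    """Variance explained by the best linear fit."""
    a, b = fit_linear(xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

xs = [i / 10.0 - 3.0 for i in range(61)]        # "brain state", symmetric about 0
r2_lin = r_squared(xs, [2.0 * x for x in xs])   # one-to-one linear mapping
r2_nonlin = r_squared(xs, [x * x for x in xs])  # many-to-one mapping: x and -x collide
print(r2_lin, r2_nonlin)  # ≈ 1.0 vs ≈ 0.0
```

The second R² collapses because states x and -x map to the same behavior, exactly the assumption violation the paper's model-comparison strategy is designed to detect.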
Lorenzi, R. M.; De Grazia, M.; Gandini Wheeler-Kingshott, C. A. M.; Palesi, F.; D'Angelo, E. U.; Casellato, C.
A mean field model (MFM) is a mesoscopic description of neuronal population dynamics that can reduce the complexity of neural microcircuits into equations preserving key functional properties. The generation of an MFM is a complex mathematical process that starts with the incorporation of single-neuron input/output relationships and local connectivity. Once neuron electroresponsiveness and synaptic properties are defined, in principle, the process can be automated. Here we develop a tool for automatic MFM derivation from biophysically grounded spiking networks (Auto-MFM) by performing micro-to-mesoscale parameter remapping, estimating input/output relationships specific to different neuronal populations (i.e., transfer functions), and optimizing transfer-function parameters. Auto-MFM was tested using a spiking cerebellar circuit as a generative model. The cerebellar MFM derived with Auto-MFM accurately reproduced the population dynamics of the corresponding spiking network, matching mean and time-varying firing rates across a wide range of stimulation patterns. Auto-MFM allowed us to model and explore physiological and pathological circuit variants; indeed, it was used to map ataxia-related structural connectivity alterations of the cerebellar network, in which Purkinje cells with simplified dendritic structure altered the cerebellar connectivity. Furthermore, Auto-MFM was used to create a library of cerebellar MFMs by sweeping the level of the excitatory conductance at the mossy fiber-granule cell synapse, which is altered in several neuropathologies. Auto-MFM thus proves to be a flexible and powerful tool for generating region-specific MFMs of healthy and pathological brain networks to be embedded in brain digital models.
Zhang, G.; Cui, Y.; Zhang, Y.; Cao, H.; Zhou, G.; Shu, H.; Yao, D.; Xia, Y.; Chen, K.; Guo, D.
Periodic visual stimulation can induce stable steady-state visual evoked potentials (SSVEPs) distributed in multiple brain regions and has potential applications in both neural engineering and cognitive neuroscience. However, the underlying dynamic mechanisms of SSVEPs at the whole-brain level are still not completely understood. Here, we addressed this issue by simulating the rich dynamics of SSVEPs with a large-scale brain model designed with constraints of neuroimaging data acquired from the human brain. By eliciting activity of the occipital areas using an external periodic stimulus, our model was capable of replicating both the spatial distributions and response features of SSVEPs that were observed in experiments. In particular, we confirmed that alpha-band (8-12 Hz) stimulation could evoke stronger SSVEP responses; this frequency sensitivity was due to nonlinear entrainment and resonance, and could be modulated by endogenous factors in the brain. Interestingly, the stimulus-evoked brain networks also exhibited significant superiority in topological properties near this frequency-sensitivity range, and stronger SSVEP responses were demonstrated to be supported by more efficient functional connectivity at the neural activity level. These findings not only provide insights into the mechanistic understanding of SSVEPs at the whole-brain level but also indicate a bright future for large-scale brain modeling in characterizing the complicated dynamics and functions of the brain.
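The alpha-band frequency sensitivity this abstract reports is, in its simplest reading, a resonance effect. A back-of-envelope stand-in (ours, with illustrative parameters, not the large-scale brain model) is the steady-state response amplitude of a damped oscillator with a ~10 Hz natural frequency: drive near 10 Hz evokes the strongest response.

```python
import math

def steady_state_amplitude(f_drive, f0=10.0, zeta=0.2, a=1.0):
    """Steady-state amplitude of x'' + 2*zeta*w0*x' + w0^2*x = a*sin(w*t)."""
    w0, w = 2 * math.pi * f0, 2 * math.pi * f_drive
    return a / math.sqrt((w0**2 - w**2) ** 2 + (2 * zeta * w0 * w) ** 2)

# Probe with drives below, at, and above the 10 Hz "alpha" natural frequency.
amps = {f: steady_state_amplitude(f) for f in (5.0, 10.0, 20.0)}
print(max(amps, key=amps.get))  # → 10.0: strongest response to alpha-band drive
```

In the paper's model the entrainment is nonlinear and shaped by endogenous brain dynamics, but the qualitative shape of the response curve — a peak near the intrinsic alpha rhythm — follows this resonance intuition.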
Zhang, B.; Liu, J.
Experience replay, characterized by the sequential reactivation of hippocampal place cells, has been proposed to consolidate past experiences and simulate future scenarios, thereby constructing cognitive maps to guide action. However, the role of experience replay in the formation of the hexagonal firing patterns of entorhinal grid cells, known to serve as the metric of the cognitive map, remains largely unknown. Here, we used continuous attractor models to simulate the formation of multi-scale hexagonal patterns, and investigated the regularity of the generated patterns by directly modulating experience replay of hippocampal place cells in the awake state. We found that reverse replay significantly increased the regularity of small-scale hexagonal patterns compared to conditions with no replay, forward replay, or shuffled replay of past experiences. In contrast, large-scale hexagonal patterns emerged spontaneously, independent of experience replay. Further analysis revealed that the efficiency of reverse replay in hexagonal pattern formation was influenced by the interaction between grid scales and the amount of past experience. Specifically, reverse replay maintained excitatory and inhibitory grid cell activity during pattern translation when the amount of past experience was limited, making it particularly efficient for the rapid formation of small-scale hexagonal patterns. In summary, our results suggest a potential link between experience replay and the formation of multi-scale hexagonal patterns from a computational perspective. These findings may provide insights into the role of grid cell signals in visual attention and the rapid adaptation of hexagonal patterns during scene transitions.